Convergence of an Online Gradient Method for BP Neural Networks with Stochastic Inputs
Authors

Abstract
An online gradient method for BP neural networks is presented and discussed. The input training examples are permuted stochastically in each cycle of the iteration. The monotonicity of the error function and a weak convergence result of a deterministic nature are proved for the method.
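As a rough illustration of the scheme the abstract describes, the Python sketch below performs per-example gradient updates on a toy one-layer sigmoid network, re-permuting the training examples at random before every cycle. The network, loss, and learning rate are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_single(w, x, y):
    # Gradient of the squared error 0.5*(sigmoid(w.x) - y)^2 for one example.
    out = sigmoid(w @ x)
    return (out - y) * out * (1.0 - out) * x

def train(w, X, Y, eta=0.1, cycles=100):
    n = len(X)
    for _ in range(cycles):
        # The inputs are permuted stochastically in each cycle of iteration.
        for i in rng.permutation(n):
            w = w - eta * grad_single(w, X[i], Y[i])  # online (per-example) step
    return w

# Usage on toy data (hypothetical):
X = rng.normal(size=(20, 3))
Y = (X[:, 0] > 0).astype(float)
w = train(rng.normal(size=3), X, Y)
```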
Similar Resources
A conjugate gradient based method for Decision Neural Network training
The Decision Neural Network is a new approach for solving multi-objective decision-making problems based on artificial neural networks. By using inexact evaluation data, network training is improved and the number of training data sets required is reduced. The available training method is based on the gradient descent method (BP). One of its limitations is its slow convergence. Therefore,...
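The abstract breaks off where it motivates a conjugate gradient alternative to plain gradient descent. As a hedged sketch of what such a method builds on (not the paper's actual algorithm), here is a generic nonlinear conjugate gradient loop, assuming the Fletcher-Reeves coefficient and a backtracking line search:

```python
import numpy as np

def cg_minimize(f, grad_f, w, iters=50):
    g = grad_f(w)
    d = -g                                   # start along steepest descent
    for _ in range(iters):
        # Backtracking (Armijo) line search along the current direction.
        alpha = 1.0
        while f(w + alpha * d) > f(w) + 1e-4 * alpha * (g @ d):
            alpha *= 0.5
        w = w + alpha * d
        g_new = grad_f(w)
        beta = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves coefficient
        d = -g_new + beta * d                # conjugate direction update
        if g_new @ d >= 0:                   # restart if not a descent direction
            d = -g_new
        g = g_new
    return w

# Usage on a toy quadratic (a hypothetical stand-in for the network error):
A = np.array([[3.0, 0.5], [0.5, 1.0]])
w_min = cg_minimize(lambda w: 0.5 * w @ A @ w, lambda w: A @ w,
                    np.array([5.0, -3.0]))
```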
Full Text

Global Learning of Neural Networks by Using Hybrid Optimization Algorithm
This paper proposes global learning of neural networks by a hybrid optimization algorithm. The hybrid algorithm combines stochastic approximation with gradient descent. The stochastic approximation is first applied to estimate an approximate point inclined toward the global minimum, escaping from a local minimum, and then the backpropagation (BP) algorithm is applied for high-speed convergence as ...
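A minimal sketch of the two-phase idea, under assumed schedules and noise levels: a stochastic-approximation phase with decaying steps and injected noise to escape local minima, followed by deterministic gradient descent for fast local convergence.

```python
import numpy as np

rng = np.random.default_rng(1)

def hybrid_minimize(grad_f, w, sa_iters=500, gd_iters=500, eta=0.05):
    # Phase 1: stochastic approximation with a Robbins-Monro style step size
    # and injected noise, nudging the iterate away from local minima.
    for t in range(1, sa_iters + 1):
        a_t = eta / t
        w = w - a_t * grad_f(w) + a_t * rng.normal(size=w.shape)
    # Phase 2: deterministic BP-style gradient descent for fast convergence.
    for _ in range(gd_iters):
        w = w - eta * grad_f(w)
    return w

# Usage on a toy double-well objective w^4 - 4w^2 (hypothetical):
grad = lambda w: 4 * w**3 - 8 * w
w_star = hybrid_minimize(grad, np.array([0.1]))
```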
Full Text

Convergence of Online Gradient Method with a Penalty Term for Feedforward Neural Networks with Stochastic
The online gradient algorithm has been widely used as a learning algorithm for feedforward neural network training. In this paper, we prove a weak convergence theorem for an online gradient algorithm with a penalty term, assuming that the training examples are input in a stochastic way. The monotonicity of the error function in the iteration and the boundedness of the weights are both guaran...
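As an illustration of such an update, the sketch below adds an assumed quadratic weight penalty 0.5*lam*||w||^2 to each online gradient step, with the examples input in stochastically permuted order; the paper's exact penalty term may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_with_penalty(w, example_grads, lam=1e-3, eta=0.1, cycles=50):
    # example_grads: per-example gradient functions of the unpenalized error.
    n = len(example_grads)
    for _ in range(cycles):
        for i in rng.permutation(n):   # training examples input stochastically
            # Online step on E_i(w) + 0.5*lam*||w||^2; the penalty's gradient
            # lam*w pulls the weights toward zero, keeping them bounded.
            w = w - eta * (example_grads[i](w) + lam * w)
    return w
```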
Full Text

Backpropagation Convergence via Deterministic Nonmonotone Perturbed Minimization
The fundamental backpropagation (BP) algorithm for training artificial neural networks is cast as a deterministic nonmonotone perturbed gradient method. Under certain natural assumptions, such as the series of learning rates diverging while the series of their squares converges, it is established that every accumulation point of the online BP iterates is a stationary point of the BP error func...
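The classic schedule satisfying those assumptions is eta_t = eta0 / (t + 1): the harmonic series diverges while the series of its squares converges. A quick numerical illustration, with an arbitrary assumed eta0:

```python
def eta(t, eta0=0.5):
    # Learning rate with sum eta_t = infinity and sum eta_t^2 < infinity.
    return eta0 / (t + 1)

# Partial sums over the first 100000 steps:
s1 = sum(eta(t) for t in range(100_000))       # grows without bound, ~ eta0*log(T)
s2 = sum(eta(t) ** 2 for t in range(100_000))  # approaches eta0^2 * pi^2 / 6
```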
Full Text

A Higher Order Online Lyapunov-Based Emotional Learning for Rough-Neural Identifiers
To enhance the performance of rough-neural networks (R-NNs) in system identification, a new stable learning algorithm based on emotional learning is developed for them. This algorithm facilitates error convergence by increasing the memory depth of the R-NNs. To this end, an emotional signal, formed as a linear combination of the identification error and its differences, is used to achie...
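A minimal sketch of such an emotional signal, with assumed coefficients: a weighted combination of the identification error and its first and second differences, which is what gives the learning rule its deeper error memory.

```python
import numpy as np

def emotional_signal(errors, k=(1.0, 0.5, 0.25)):
    # errors: the most recent identification errors [e(t), e(t-1), e(t-2)].
    e = np.asarray(errors, dtype=float)
    d1 = e[0] - e[1]            # first difference of the error
    d2 = d1 - (e[1] - e[2])     # second difference of the error
    # Linear combination of the error and its differences (coefficients assumed).
    return k[0] * e[0] + k[1] * d1 + k[2] * d2

# Usage (hypothetical): the signal drives the weight update in place of raw error.
es = emotional_signal([0.8, 0.5, 0.3])
```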
Full Text